4,630 results found (search time: 15 ms). Entries 991–1000 are shown below.
991.
We present a level-of-detail (LOD) method designed for tree branches. It can be combined with methods for processing tree foliage to facilitate navigation through large virtual forests. Starting from a skeletal representation of a tree, we fit polygon meshes of various densities to the skeleton, adjusting the mesh density to the required visual fidelity. For distant models, these branch meshes are gradually replaced with semi-transparent lines until the tree recedes to a few lines. Construction of these complete LOD models is guided by error metrics to ensure smooth transitions between adjacent LOD models. We then present an instancing technique for discrete LOD branch models consisting of polygon meshes plus semi-transparent lines. Line models with different transparencies are instanced on the GPU by merging multiple tree samples into a single model. Our technique reduces the number of GPU draw calls and increases rendering performance. Our experiments demonstrate that large-scale forest scenes can be rendered with excellent detail and shadows in real time.
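The error-metric-guided selection between adjacent LOD models described above can be sketched as a distance-based selector: pick the coarsest level whose projected (screen-space) error stays under a budget. The per-level error values, the budget, and the function name below are illustrative assumptions, not the paper's actual metrics.

```python
def select_lod(distance, levels=(0.5, 1.0, 2.0, 4.0, 8.0), budget=0.01):
    """Return the index of the coarsest LOD whose screen-space error
    (geometric error / viewing distance) stays within the budget.

    levels[i] is the geometric error of LOD i, sorted ascending:
    0 = full branch mesh ... last = a few semi-transparent lines.
    """
    chosen = 0
    for i, err in enumerate(levels):
        # Coarser levels have larger geometric error; accept the
        # coarsest one whose projected error is still tolerable.
        if err / max(distance, 1e-6) <= budget:
            chosen = i
    return chosen
```

Because the error values are sorted, the first level that fails the budget rules out all coarser ones, so the scan settles on the transition point between adjacent LODs.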
992.
Recent years have witnessed extensive studies of graph classification due to the rapid increase in applications involving structural data and complex relationships. To support graph classification, all existing methods require that training graphs be relevant (or belong) to the target class, but cannot integrate graphs irrelevant to the class of interest into the learning process. In this paper, we study a new universum graph classification framework which leverages additional "non-example" graphs to help improve graph classification accuracy. We argue that although universum graphs do not belong to the target class, they may contain meaningful structure patterns that help enrich the feature space for graph representation and classification. To support universum graph classification, we propose a mathematical programming algorithm, ugBoost, which integrates discriminative subgraph selection and margin maximization into a unified framework to fully exploit the universum. Because informative subgraph exploration in a universum setting requires searching a large space, we derive an upper-bound discriminative score for each subgraph and employ a branch-and-bound scheme to prune the search space. Using the explored subgraphs, our graph classification model maximizes the margin between positive and negative graphs while simultaneously minimizing the loss on the universum graph examples. The subgraph exploration and the learning are integrated and performed iteratively so that each can benefit the other. Experimental results and comparisons on real-world datasets demonstrate the effectiveness of our algorithm.
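The branch-and-bound pruning idea — discard any branch whose optimistic upper bound cannot beat the best score found so far — can be illustrated on a toy subset-selection problem. This sketch is not ugBoost; the gains, bound, and size limit are stand-ins for the subgraph discriminative scores and their derived upper bound.

```python
def best_subset(gains, k):
    """Find the best total score over subsets of size <= k by
    branch-and-bound.  The optimistic bound (add the largest remaining
    positive gains) plays the same role as the upper-bound
    discriminative score in subgraph search: it prunes branches that
    cannot improve on the incumbent."""
    gains = sorted(gains, reverse=True)
    best = [0.0]  # incumbent: empty subset scores 0

    def recurse(idx, taken, score):
        best[0] = max(best[0], score)
        if idx == len(gains) or taken == k:
            return
        # Optimistic bound: take the next (k - taken) gains if positive.
        room = k - taken
        bound = score + sum(g for g in gains[idx:idx + room] if g > 0)
        if bound <= best[0]:
            return  # prune: this branch cannot beat the incumbent
        recurse(idx + 1, taken + 1, score + gains[idx])  # include item
        recurse(idx + 1, taken, score)                   # exclude item

    recurse(0, 0, 0.0)
    return best[0]
```

Sorting the gains first makes the bound tight early, so most of the exponential search tree is cut off near the root — the same effect the paper relies on to make subgraph exploration tractable.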
993.
Heterogeneity, parallelization and vectorization are key techniques for improving the performance and energy efficiency of modern computing systems. However, programming and maintaining code for these architectures poses a huge challenge due to ever-increasing architectural complexity. Task-based environments hide most of this complexity, improving scalability and usage of the available resources. In these environments, while there has been a lot of effort to ease parallelization and improve the usage of heterogeneous resources, vectorization has been considered a secondary objective. Furthermore, vector architectures have proliferated rapidly across all market segments, from embedded to HPC. Vectorization can no longer be ignored, but manual vectorization is tedious, error-prone and impractical for the average programmer. This work evaluates the feasibility of user-directed vectorization in task-based applications. Our evaluation is based on the OmpSs programming model, extended to support user-directed vectorization for different SIMD architectures (i.e., SSE, AVX2, AVX512). Results show that user-directed codes match the performance and energy efficiency of manually optimized code with minimal code modifications, favoring portability across different SIMD architectures.
994.
Dictionaries are very useful objects for data analysis, as they enable a compact representation of large sets of objects through the combination of atoms. Dictionary-based techniques have also particularly benefited from recent advances in machine learning, which have allowed data-driven algorithms to take advantage of the redundancy in the input dataset and discover relations between objects without human supervision or hard-coded rules. Despite the success of dictionary-based techniques on a wide range of tasks in geometric modeling and geometry processing, the literature lacks a principled survey of the state of the art in this field. To fill this gap, we provide in this survey an overview of data-driven dictionary-based methods in geometric modeling. We structure our discussion by application domain: surface reconstruction, compression, and synthesis. Contrary to previous surveys, we place special emphasis on dictionary-based methods suitable for 3D data synthesis, with applications in geometric modeling and design. Our ultimate goal is to highlight that these techniques can combine the data-driven paradigm with design intent to synthesize new plausible objects with minimal human intervention. This is the main motivation for restricting the scope of the present survey to techniques that handle point clouds and meshes, make use of dictionaries whose definition depends on the input data, and enable shape reconstruction or synthesis through the combination of atoms.
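Reconstruction "through the combination of atoms" is commonly performed with greedy sparse coding. The following matching-pursuit sketch shows the core operation — repeatedly picking the dictionary atom most correlated with the residual — on plain vectors; it is an illustrative baseline, not any specific surveyed method, and assumes unit-norm atoms.

```python
def matching_pursuit(signal, atoms, n_iter=3):
    """Greedily approximate `signal` as a linear combination of
    dictionary `atoms` (unit-norm vectors of the same length).
    Returns the coefficients and the remaining residual."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        # Correlate every atom with the current residual.
        dots = [sum(r * a for r, a in zip(residual, atom))
                for atom in atoms]
        # Pick the atom that explains the most energy.
        k = max(range(len(atoms)), key=lambda i: abs(dots[i]))
        coeffs[k] += dots[k]
        # Subtract the chosen atom's contribution.
        residual = [r - dots[k] * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual
```

With an orthonormal dictionary the residual vanishes after one pass per active atom; with redundant, learned dictionaries (the case this survey covers) the greedy choice is what exploits redundancy in the data.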
995.
To describe the interfacial dynamics between two phases using the phase-field method, the interfacial region needs to be close enough to a sharp interface to reproduce the correct physics. Due to the high gradients of the solution within the interfacial region and the consequent high computational cost, use of the phase-field method has been limited to small-scale problems whose characteristic length is comparable to the interfacial thickness. By using a finer mesh at the interface and a coarser mesh in the rest of the computational domain, phase-field methods can handle larger-scale problems with realistic interface thicknesses. In this work, a C1-continuous h-adaptive mesh refinement technique with the least-squares spectral element method is presented. It is applied to the Navier–Stokes–Cahn–Hilliard (NSCH) system and the isothermal Navier–Stokes–Korteweg (NSK) system. Hermite polynomials are used to give global differentiability in the approximated solution, and a space–time coupled formulation and the element-by-element technique are implemented. Two refinement strategies, based on the solution gradient and on local error estimators, are suggested and compared in two numerical examples.
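The gradient-based refinement strategy — fine mesh where the solution varies sharply (the interface), coarse mesh elsewhere — can be sketched in one dimension as a single refinement sweep. The mesh, the steep test profile, and the tolerance below are illustrative assumptions, not the paper's NSCH/NSK setup.

```python
def refine(nodes, f, tol):
    """One sweep of gradient-driven h-refinement on a 1-D mesh.

    `nodes` is a sorted list of node coordinates; any interval whose
    solution jump |f(b) - f(a)| exceeds `tol` is bisected, placing
    extra resolution at steep (interfacial) regions only.
    """
    out = [nodes[0]]
    for a, b in zip(nodes, nodes[1:]):
        if abs(f(b) - f(a)) > tol:
            out.append(0.5 * (a + b))  # bisect the steep interval
        out.append(b)
    return out
```

Applied repeatedly (with a matching coarsening rule), this concentrates elements in a band around the interface, which is what makes realistic interface thicknesses affordable.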
996.
Reservoir computing is a bio-inspired computing paradigm for processing time-dependent signals. The performance of its analogue implementations matches that of digital algorithms on a series of benchmark tasks. Their potential can be further increased by feeding the output signal back into the reservoir, which would allow the algorithm to be applied to time-series generation. This requires, in principle, implementing a sufficiently fast readout layer for real-time output computation. Here we achieve this with a digital output layer driven by an FPGA chip. We demonstrate the first opto-electronic reservoir computer with output feedback and test it on two examples of time-series generation tasks: frequency generation and random pattern generation. We obtain very good results on the first task, similar to idealised numerical simulations. The performance on the second, however, suffers from experimental noise. We illustrate this point with a detailed investigation of the consequences of noise on the performance of a physical reservoir computer with output feedback. Our work thus opens new possible applications for analogue reservoir computing and brings new insights on the impact of noise on output feedback.
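The closed-loop configuration described above — the readout's output fed back in as the reservoir's input — can be sketched as a minimal echo-state-style simulation. All sizes and weights below are arbitrary illustrative choices, and the readout is untrained, so the sketch shows only the structure of the feedback loop, not a working generator.

```python
import math
import random

def run_reservoir(steps, n=30, feedback=True, seed=0):
    """Simulate a tiny recurrent reservoir for `steps` ticks.

    With feedback=True the scalar output y drives the reservoir at the
    next step (autonomous generation); with feedback=False an external
    teacher signal drives it instead (the open-loop training setup).
    """
    rng = random.Random(seed)
    w_in = [rng.uniform(-0.5, 0.5) for _ in range(n)]          # input weights
    w = [[rng.uniform(-1, 1) * (rng.random() < 0.1)            # sparse
          for _ in range(n)] for _ in range(n)]                # recurrent weights
    w_out = [rng.uniform(-0.1, 0.1) for _ in range(n)]         # (untrained) readout
    x = [0.0] * n
    y, out = 0.0, []
    for t in range(steps):
        drive = y if feedback else math.sin(0.3 * t)
        x = [math.tanh(sum(w[i][j] * x[j] for j in range(n)) + w_in[i] * drive)
             for i in range(n)]
        y = sum(wo * xi for wo, xi in zip(w_out, x))           # readout
        out.append(y)
    return out
```

In the paper's experiment the readout sum is exactly what the FPGA-driven digital layer must compute within one time step — the feedback loop is only as fast as its slowest stage.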
997.
998.
999.
A 2-step sinter/anneal treatment has been reported previously for forming porous CPP as a biodegradable bone substitute [9]. During the 2-step annealing treatment, the heat treatment used strongly affected the rate of CPP degradation in vitro. In the present study, X-ray diffraction and 31P solid-state nuclear magnetic resonance were used to determine the phases that formed using different heat-treating processes. The effect of in vitro degradation (in PBS at 37 °C, pH 7.1 or 4.5) was also studied. During CPP preparation, β-CPP and γ-CPP were identified in powders formed from a calcium monobasic monohydrate precursor after an initial calcining treatment (10 h at 500 °C). Melting of this CPP powder (at 1100 °C), quenching and grinding formed amorphous CPP powders. Annealing powders at 585 °C (Step 1) resulted in rapid sintering to form amorphous porous CPP. Continued annealing to 650 °C resulted in crystallization to form a multi-phase structure of primarily β-CPP plus lesser amounts of α-CPP, calcium ultra-phosphates and retained amorphous CPP. Annealing above 720 °C and up to 950 °C transformed this to the β-CPP phase. In vitro degradation of the 585 °C (Step 1 only) and 650 °C Step 2 annealed multi-phase samples occurred significantly faster than for the β-CPP samples formed by Step 2 annealing at or above 720 °C. This faster degradation was attributable to preferential degradation of the thermodynamically less stable phases that formed in samples annealed at 650 °C (i.e. α-phase, ultra-phosphate and amorphous CPP). Degradation in lower pH solutions significantly increased degradation rates of the 585 °C and 650 °C annealed samples but had no significant effect on the β-CPP samples.
1000.
This study evaluates the potential of object-based image analysis in combination with supervised machine learning to identify urban structure type patterns from Landsat Thematic Mapper (TM) images. The main aim is to assess the influence of several critical choices commonly made during the training stage of a learning machine on classification performance and to give recommendations for classifier-dependent intelligent training. Particular emphasis is given to assessing the influence of the size and class distribution of the training data, the approach to training data sampling (user-guided or random) and the type of training samples (squares or segments) on the classification performance of a Support Vector Machine (SVM). Different feature selection algorithms are compared, and segmentation and classifier parameters are dynamically tuned for the specific image scene, classification task, and training data. The performance of the classifier is measured against a set of reference data sets from manual image interpretation and is furthermore compared, on the basis of landscape metrics, to a very high resolution reference classification derived from light detection and ranging (lidar) measurements. The study highlights the importance of a careful design of the training stage and dynamically tuned classifier parameters, especially when dealing with noisy data and small training data sets. For the given experimental set-up, the study concludes that, given an optimized feature space and classifier parameters, training an SVM with segment-shaped samples that were sampled in a guided manner and are balanced between the classes provided the best classification results. When square-shaped samples were used, random sampling provided better results than guided selection. Equally balanced sample distributions outperformed unbalanced training sets.
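The finding that equally balanced training sets outperform unbalanced ones can be operationalized with a small sampling helper: draw the same number of samples per class before training. This is an illustrative helper, not the study's code; the data layout (a list of `(features, label)` pairs) and the function name are assumptions.

```python
import random

def balanced_sample(samples, per_class, seed=0):
    """Draw an equally balanced training set from labelled samples.

    `samples` is a list of (features, label) pairs.  Each class
    contributes at most `per_class` randomly chosen items; classes with
    fewer items are taken whole, so mild imbalance can remain for rare
    classes.
    """
    rng = random.Random(seed)
    by_class = {}
    for s in samples:
        by_class.setdefault(s[1], []).append(s)
    out = []
    for label in sorted(by_class):          # deterministic class order
        items = list(by_class[label])
        rng.shuffle(items)
        out.extend(items[:per_class])
    return out
```

For the guided variant favoured in the study, the random draw within each class would be replaced by an analyst's selection of representative segments; the per-class cap stays the same.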
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号